13 - Artificial Intelligence I [ID:49632]

Okay, mic on? Good. So, it's cold outside, you can see, and early. So, we were talking

about constraint satisfaction problems, and the idea is that we're using factored representations.

That means the states are not black box anymore, but they are described in terms of features.

If you think about Sudoku, the 81 important features are the cells where numbers can go:

the one in the left-hand corner, the one next to it, and so on. And we're basically making a representation

of this in terms of variables, variables that can have values. We're representing

the value of each feature by one of these variables. And then we make a search process

in filling this out, so that we do not violate the rules of the game. The rules of the game,

we call constraints. The features we formulate as variables, because we do not yet know

what the value of the feature is. And every variable has a domain. This is the main idea.

And we've kind of boiled it down, so that we can get by with binary constraints. And that is what

we've kind of put into this notion of a constraint network. A constraint network, again, is a triple

made out of a set of variables, their domains, whatever they may be, as long as they are finite, and a set of

constraints, and the constraints are actually relations between the domains. They're binary. The only thing

we insist on here is symmetry: whether you look at it as a constraint on u

given v, or the other way around, the same pairs are allowed. So that's the mathematical structure

we're dealing with. We've defined the concept of a solution, which is really centered on satisfying

a constraint. And since the constraint is a relation, which is a set of input-output pairs, a pair of assigned

values has to be in the relation. Easy peasy. And that allows us to say when an assignment, a variable

assignment, a partial function from variable names into the respective domains, actually satisfies the constraints.
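The structure described so far can be sketched in a few lines of Python; all names here (`variables`, `domains`, `constraints`, `satisfies`) are illustrative, not from the lecture materials:

```python
# A tiny binary constraint network as a triple (variables, domains,
# constraints), with each constraint stored as a relation, i.e. a set
# of admissible value pairs. All names here are illustrative.

variables = ["u", "v", "w"]
domains = {"u": {1, 2, 3}, "v": {1, 2, 3}, "w": {1, 2, 3}}

# u != v and v != w, as between neighbouring cells in a Sudoku row.
constraints = {
    ("u", "v"): {(a, b) for a in domains["u"] for b in domains["v"] if a != b},
    ("v", "w"): {(a, b) for a in domains["v"] for b in domains["w"] if a != b},
}

def satisfies(assignment):
    """True iff the (possibly partial) assignment violates no constraint."""
    for (x, y), relation in constraints.items():
        if x in assignment and y in assignment:
            if (assignment[x], assignment[y]) not in relation:
                return False
    return True
```

Note that `satisfies` only checks constraints whose two variables are both assigned, so a partial assignment, including the empty one, can already count as satisfying.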

And now that allows us to basically define solutions, namely total satisfying variable

assignments, but it also gives us a way of thinking of this as a kind of a meta-search problem,

namely starting with the empty assignment, which is trivially satisfying, because it assigns nothing,

so there's nothing to violate. We start with the empty assignment and we extend it

with new input-output pairs, and in that way we obtain a meta-search problem,

which we're going to solve with depth-first search, backtracking by depth-first search.

So we have a new solution method. Yes?

Yes.

Since essentially we're defining satisfaction, constraint satisfaction, one assignment at a time,

it also makes sense for partial assignments. So the solutions have to be total, but during our search

we want to work with partial assignments, and that is actually the main driving force here.

And that's the search, right? We choose a variable, assign a value, see whether it's still satisfying,

and then we iterate, and the solutions are always at depth n, where n is the

number of variables. So this is wonderful, as if it were made for backtracking search.
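That backtracking search over partial assignments might look like this in Python; the little network and the helper names are illustrative, not taken from the lecture:

```python
# Backtracking depth-first search over partial assignments, as described:
# start from the empty assignment, extend it one variable at a time, and
# undo an extension as soon as it violates a constraint. Illustrative sketch.

variables = ["u", "v", "w"]
domains = {v: {1, 2, 3} for v in variables}
constraints = {  # u != v and v != w, stored as relations (sets of pairs)
    ("u", "v"): {(a, b) for a in {1, 2, 3} for b in {1, 2, 3} if a != b},
    ("v", "w"): {(a, b) for a in {1, 2, 3} for b in {1, 2, 3} if a != b},
}

def consistent(assignment):
    return all((assignment[x], assignment[y]) in rel
               for (x, y), rel in constraints.items()
               if x in assignment and y in assignment)

def backtrack(assignment):
    if len(assignment) == len(variables):      # solutions sit at depth n
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):             # still satisfying?
            result = backtrack(assignment)
            if result is not None:
                return result
        del assignment[var]                    # undo and try the next value
    return None                                # dead end: backtrack

solution = backtrack({})
```

The `del` line is the backtracking itself: an extension that leads nowhere is undone, and the search continues with the next candidate value.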

Okay, we looked at a couple of examples, and the important thing is we looked at a couple of heuristics.

These heuristics, by and large, are so good that there have to be good reasons not to use them.

In some situations, if we know a lot about the structure, we might have better solutions,

better heuristics, but if you don't really know much about the problem, these are the things you

want to use. And there are three of them, one is the minimum remaining value heuristic, which helps

you choose the variables. This has one problem, kind of in the early game, this doesn't really tell us

much. There's no help in this situation, and there's no help in that situation.

In these situations where we have not quite enough information, we actually use the degree

heuristic as a tiebreaker. So together those actually give us something that works quite well.

And the last thing: once we've chosen a variable, what is the best value, if we have a choice?

Here we want the least constraining value heuristic. The idea is that we want to get

to a solution, so we want to keep as much choice as possible open in the values, not in the variables.
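A hedged sketch of how the three heuristics could be implemented, with illustrative helper names and a tiny example network of my own (not from the lecture):

```python
# MRV picks the variable with the fewest remaining values, the degree
# heuristic breaks ties by the number of constraints on still-unassigned
# variables, and LCV orders a variable's values so that the choice ruling
# out the fewest neighbour values comes first. Illustrative sketch.

def select_variable(assignment, variables, domains, constraints):
    unassigned = [v for v in variables if v not in assignment]

    def degree(v):  # constraints linking v to other unassigned variables
        return sum(1 for (x, y) in constraints
                   if (x == v and y in unassigned) or
                      (y == v and x in unassigned))

    # minimum remaining values first, maximum degree as the tiebreaker
    return min(unassigned, key=lambda v: (len(domains[v]), -degree(v)))

def order_values(var, assignment, domains, constraints):
    def ruled_out(value):  # neighbour values this choice would eliminate
        count = 0
        for (x, y), rel in constraints.items():
            if x == var and y not in assignment:
                count += sum(1 for b in domains[y] if (value, b) not in rel)
            elif y == var and x not in assignment:
                count += sum(1 for a in domains[x] if (a, value) not in rel)
        return count
    return sorted(domains[var], key=ruled_out)  # least constraining first

# Tiny example: u is forced to 1 and u != v, so v should prefer the value 2.
variables = ["u", "v"]
domains = {"u": {1}, "v": {1, 2}}
constraints = {("u", "v"): {(a, b) for a in {1} for b in {1, 2} if a != b}}
```

On this example, `select_variable` picks `u` (one remaining value beats two), and `order_values` for `v` puts 2 before 1, since choosing 1 would rule out `u`'s only option.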

And that's how far we got. Are there any questions?

Yes.

Yes, they work for every problem; they only talk about whether there are constraints, how many of them there are,

and what is left over in the values, and so on. So you can apply them to everything. Whereas other

Part of a video series

Accessible via: Open Access

Duration: 01:30:53 min

Recording date: 2023-11-29

Uploaded on: 2023-11-29 18:29:05

Language: en-US
